Bayesian Learning of unobservable output aggregating multiple weak classifiers

Authors

  • Badih Ghattas
  • Gonzalo Perera
  • Juan Piccini
Abstract

In this paper we propose a Bayesian classification rule for a setting in which the training sample is incomplete: the input X is observable, but the output Y is not, and only the results of several weak classifiers are available in place of Y. Our method is motivated by problems in epidemiology and bioinformatics. We prove the consistency of the method and illustrate its efficiency using simulations. Although, to the best of our knowledge, there are no comparable algorithms for unobservable outputs, we compare our method with supervised approaches in our simulations.
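The abstract describes the setting only informally. As a purely illustrative sketch, and not the authors' estimator, the following Python snippet simulates that setting: a label Y that the learner never sees, a set of weak classifiers with hypothetical accuracies q_k, and a naive-Bayes style aggregation of their votes under an assumed conditional-independence hypothesis. All constants and the data-generating process here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation of the setting in the abstract:
# X is observable, the true label Y is hidden from the learner,
# and only the outputs of K weak classifiers are available.
n, K = 1000, 5
X = rng.normal(size=(n, 2))
Y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hidden ground truth, never used for fitting

# Weak classifiers modelled as noisy copies of Y with assumed accuracies q_k.
q = np.array([0.65, 0.70, 0.60, 0.75, 0.68])
votes = np.column_stack([np.where(rng.random(n) < qk, Y, 1 - Y) for qk in q])

# Naive-Bayes style aggregation: assuming the weak classifiers are
# conditionally independent given Y, the posterior log-odds of Y = 1
# reduce to a weighted vote with weights log(q_k / (1 - q_k)).
prior = 0.5
log_odds = np.log(prior / (1 - prior)) + (
    votes * np.log(q / (1 - q)) + (1 - votes) * np.log((1 - q) / q)
).sum(axis=1)
Y_hat = (log_odds > 0).astype(int)

print("agreement of the aggregated label with the hidden Y:", (Y_hat == Y).mean())
```

The rule actually proposed in the paper may differ substantially; the sketch only conveys the intuition that aggregating noisy weak-classifier votes can recover a label that is never observed directly.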


Related articles

The Bagging and n2-Classifiers Based on Rules Induced by MODLEM

An application of the rule induction algorithm MODLEM to construct multiple classifiers is studied. Two such classifiers are considered: the bagging approach, where classifiers are generated from different samples of the learning set, and the n2-classifier, which is specialized for solving multiple-class learning problems. This paper reports results of an experimental comparison of the...


Bayesian Co-Boosting for Multi-modal Gesture Recognition

With the development of data acquisition equipment, more and more modalities become available for gesture recognition. However, there still exist two critical issues for multimodal gesture recognition: how to select discriminative features for recognition and how to fuse features from different modalities. In this paper, we propose a novel Bayesian Co-Boosting framework for multi-modal gesture ...


Study of Four Types of Learning Bayesian Networks Cases

Learning Bayesian networks can be examined as the combination of parameter learning and structure learning. Parameter learning is the estimation of the dependencies in the network; structural learning is the estimation of the links of the network. In terms of whether the structure of the network is known and whether the variables are all observable, there are four types of learning Bayesian n...


Online Visual Tracking using Multiple Instance Learning with Instance Significance Estimation

Multiple Instance Learning (MIL) has recently provided an appealing way to alleviate the drifting problem in visual tracking. Following the tracking-by-detection framework, an online MILBoost approach is developed that sequentially chooses weak classifiers by maximizing the bag likelihood. In this paper, we extend this idea towards incorporating the instance significance estimation into the online ...


Bayesian Averaging of Classifiers and the Overfitting Problem

Although Bayesian model averaging is theoretically the optimal method for combining learned models, it has seen very little use in machine learning. In this paper we study its application to combining rule sets, and compare it with bagging and partitioning, two popular but more ad hoc alternatives. Our experiments show that, surprisingly, Bayesian model averaging’s error rates are consistently ...



Journal:

Volume   Issue

Pages  -

Publication date: 2013